The demonstration shows how classifier robustness can be analysed. The upper panels show a classifier with a greater degree of overlap between the two prediction densities. The discrimination power of this classifier is less robust against the threshold variation. The lower panels show a classifier with a slight degree of overlap between the two prediction densities. The discrimination power of this classifier is thus more robust against the threshold variation. The curves stand for the prediction densities, the shaded areas stand for the misclassifications, and the dots stand for the thresholds.
Without doubt, it is not straightforward to check the overlap between the two prediction densities of two classes shown in Figure 3.11 for the purpose of the robustness measurement of a classifier. Therefore, the receiver operating characteristic (ROC) curve has been developed for classifier robustness evaluation [Hanley and McNeil, 1982]. Using this technique, the threshold for the discrimination between two classes is varied from the lowest value to the highest (the threshold variation), and the classification performance is recorded at each step. The latter varies along with the threshold variation. Therefore, ROC is a technique for examining how sensitive the classification accuracy is to the threshold variation.
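The threshold-variation idea can be sketched in a few lines of Python. This is a minimal illustration, not the procedure from any particular study; the scores, labels, and the simple accuracy measure are hypothetical assumptions made for the example.

```python
def accuracy_at(scores, labels, threshold):
    """Fraction of samples classified correctly when predicting
    class 1 for scores >= threshold, and class 0 otherwise."""
    correct = sum(1 for s, y in zip(scores, labels)
                  if (1 if s >= threshold else 0) == y)
    return correct / len(labels)

# Hypothetical prediction scores and true class labels (not from the text).
scores = [0.2, 0.4, 0.45, 0.55, 0.7, 0.9]
labels = [0, 0, 1, 0, 1, 1]

# Sweep the threshold from the lowest value to the highest and record
# the classification performance at each step.
sweep = [(t / 10, accuracy_at(scores, labels, t / 10)) for t in range(11)]
```

Plotting `sweep` would show the recorded performance rising and falling as the threshold moves through the region where the two prediction densities overlap.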
Technically, two measurements are used in an ROC analysis: the false positive rate and the true positive rate. The false positive rate is plotted on the horizontal axis of an ROC space and the true positive rate on the vertical axis. Therefore, an ROC plot is a two-dimensional visualisation of the robustness of a classifier.
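The two measurements can be computed directly from their definitions. The sketch below, under the same hypothetical-data assumption as above (none of these scores or labels come from the source), returns one (false positive rate, true positive rate) pair per threshold; plotted together, such pairs trace out the ROC curve.

```python
def roc_point(scores, labels, threshold):
    """Return (false positive rate, true positive rate) at one threshold.

    A sample is predicted positive when its score is >= threshold.
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    pos = sum(1 for y in labels if y == 1)   # actual positives
    neg = sum(1 for y in labels if y == 0)   # actual negatives
    return fp / neg, tp / pos

# Hypothetical prediction scores and true class labels (not from the text).
scores = [0.1, 0.3, 0.35, 0.6, 0.8, 0.9]
labels = [0, 0, 1, 0, 1, 1]

# One (FPR, TPR) pair per threshold.
points = [roc_point(scores, labels, t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

At the lowest threshold every sample is predicted positive, giving the point (1, 1); at the highest, every sample is predicted negative, giving (0, 0); intermediate thresholds fill in the curve between them.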
Figure 3.12(a) shows an example. Suppose seven thresholds have been tested; seven pairs of classification performance measures (the false positive rates and the true positive rates) are obtained. They are then projected onto the two-dimensional space shown in Figure 3.12(b). Each